32 research outputs found

    ASL Citizen: A Community-Sourced Dataset for Advancing Isolated Sign Language Recognition

    Sign languages are used as a primary language by approximately 70 million D/deaf people worldwide. However, most communication technologies operate in spoken and written languages, creating inequities in access. To help tackle this problem, we release ASL Citizen, the first crowdsourced Isolated Sign Language Recognition (ISLR) dataset, collected with consent and containing 83,399 videos for 2,731 distinct signs filmed by 52 signers in a variety of environments. We propose that this dataset be used for sign language dictionary retrieval for American Sign Language (ASL), where a user demonstrates a sign to their webcam to retrieve matching signs from a dictionary. We show that training supervised machine learning classifiers with our dataset advances the state of the art on metrics relevant for dictionary retrieval, achieving 63% accuracy and a recall-at-10 of 91%, evaluated entirely on videos of users who are not present in the training or validation sets. An accessible PDF of this article is available at the following link: https://aashakadesai.github.io/research/ASLCitizen_arxiv_updated.pdf
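
    The recall-at-10 number above is a standard top-k retrieval metric: the fraction of query videos whose correct sign appears among the 10 highest-scoring dictionary entries. A minimal sketch of how such a metric could be computed is below; the function name, array shapes, and toy scores are illustrative assumptions, not code from the ASL Citizen release.

```python
import numpy as np

def recall_at_k(scores: np.ndarray, labels: np.ndarray, k: int = 10) -> float:
    """Fraction of queries whose true sign is among the k top-scoring dictionary entries.

    scores: (num_queries, num_signs) classifier scores for each query video
    labels: (num_queries,) index of the correct sign for each query
    """
    top_k = np.argsort(-scores, axis=1)[:, :k]      # k best-scoring signs per query
    hits = (top_k == labels[:, None]).any(axis=1)   # did the true sign make the cut?
    return float(hits.mean())

# Toy usage: 3 query videos scored against a 5-sign vocabulary, evaluated at k=2
scores = np.array([[0.10, 0.70, 0.10, 0.05, 0.05],
                   [0.30, 0.20, 0.10, 0.30, 0.10],
                   [0.25, 0.25, 0.20, 0.20, 0.10]])
labels = np.array([1, 3, 4])
print(recall_at_k(scores, labels, k=2))   # 0.666... (2 of 3 true signs in the top 2)
```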

    The Promise and Peril of Parallel Chat in Video Meetings for Work

    We report the opportunities and challenges of parallel chat in work-related video meetings, drawing on a study of Microsoft employees’ remote meeting experiences during the COVID-19 pandemic. We find that parallel chat allows groups to communicate flexibly without interrupting the main conversation and to coordinate action around shared resources, and that it improves inclusivity. On the other hand, parallel chat can also be distracting, overwhelming, and cause information asymmetries. Further, we find that whether an individual views parallel chat as a net positive in meetings depends on complex interactions between meeting type, personal habits, and intentional group practices. We suggest opportunities for tools and practices to capitalise on the strengths of parallel chat and mitigate its weaknesses.

    Crowdsourcing the Perception of Machine Teaching

    Teachable interfaces can empower end-users to attune machine learning systems to their idiosyncratic characteristics and environment by explicitly providing pertinent training examples. While they facilitate control, their effectiveness can be hindered by lack of expertise or by misconceptions. We investigate how users may conceptualize, experience, and reflect on their engagement in machine teaching by deploying a mobile teachable testbed on Amazon Mechanical Turk. Using a performance-based payment scheme, Mechanical Turkers (N = 100) are asked to train, test, and re-train a robust recognition model in real time with a few snapshots taken in their environment. We find that participants incorporate diversity in their examples, drawing parallels to how humans recognize objects independent of size, viewpoint, location, and illumination. Many of their misconceptions relate to consistency and to model capabilities for reasoning. With limited variation and edge cases in testing, the majority of them do not change strategies on a second training attempt. Comment: 10 pages, 8 figures, 5 tables, CHI 2020 conference.
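
    The train/test/re-train loop described above can be pictured with a small sketch: a classifier whose only training signal is the handful of snapshots a user supplies per object. The nearest-centroid rule, the embedding dimensionality, and the class names below are illustrative assumptions, not the testbed's actual recognition model.

```python
import numpy as np

class TeachableClassifier:
    """Minimal teach/test/re-teach loop: each class is represented by the mean
    (centroid) of the embeddings of the snapshots the user has provided for it."""

    def __init__(self):
        self.examples = {}   # label -> list of teaching embeddings

    def teach(self, label: str, embedding: np.ndarray) -> None:
        # Varied snapshots (size, viewpoint, location, lighting) shift the class
        # centroid toward a more robust representation of the object.
        self.examples.setdefault(label, []).append(embedding)

    def predict(self, embedding: np.ndarray) -> str:
        centroids = {lbl: np.mean(vecs, axis=0) for lbl, vecs in self.examples.items()}
        return min(centroids, key=lambda lbl: np.linalg.norm(centroids[lbl] - embedding))

# Toy usage: random 4-d vectors stand in for image embeddings of two objects
rng = np.random.default_rng(0)
clf = TeachableClassifier()
for _ in range(3):                      # "teach" phase: a few snapshots per object
    clf.teach("mug", rng.normal(0.0, 0.1, 4))
    clf.teach("keys", rng.normal(1.0, 0.1, 4))
print(clf.predict(rng.normal(1.0, 0.1, 4)))   # "test" phase; expected: keys
```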

    The Semantic Reader Project: Augmenting Scholarly Documents through AI-Powered Interactive Reading Interfaces

    Scholarly publications are key to the transfer of knowledge from scholars to others. However, research papers are information-dense, and as the volume of the scientific literature grows, so does the need for new technology to support the reading process. In contrast to the process of finding papers, which has been transformed by Internet technology, the experience of reading research papers has changed little in decades. The PDF format for sharing research papers is widely used due to its portability, but it has significant downsides, including static content, poor accessibility for low-vision readers, and difficulty reading on mobile devices. This paper explores the question "Can recent advances in AI and HCI power intelligent, interactive, and accessible reading interfaces -- even for legacy PDFs?" We describe the Semantic Reader Project, a collaborative effort across multiple institutions to explore automatic creation of dynamic reading interfaces for research papers. Through this project, we've developed ten research prototype interfaces and conducted usability studies with more than 300 participants and real-world users, showing improved reading experiences for scholars. We've also released a production reading interface for research papers that will incorporate the best features as they mature. We structure this paper around challenges scholars and the public face when reading research papers -- Discovery, Efficiency, Comprehension, Synthesis, and Accessibility -- and present an overview of our progress and remaining open challenges.

    Expanding Information Access through Data-Driven Design

    Thesis (Ph.D.)--University of Washington, 2018. Computer scientists have made progress on many problems in information access: curating large datasets, developing machine learning and computer vision, building extensive networks, and designing powerful interfaces and graphics. However, we sometimes fail to fully leverage these modern techniques, especially when building systems inclusive of people with disabilities (who total a billion worldwide [264], and nearly one in five in the U.S. [39]). For example, visual graphics and small text may exclude people with visual impairments, and text-based resources like search engines and text editors may not fully support people using unwritten sign languages. In this dissertation, I argue that if we are willing to break with traditional modes of information access, we can leverage modern computing capabilities from computer graphics, crowdsourcing, topic modeling, and participatory design to greatly improve and enrich access. This dissertation demonstrates this potential for expanded access through four systems I built as a Ph.D. student: 1) Smartfonts (Chapter 3), scripts that reimagine the alphabet’s appearance to improve legibility and enrich text displays, leveraging modern screens’ ability to render letterforms not easily hand-drawn; 2) Livefonts (Chapter 4), a subset of Smartfonts that use animation to further differentiate letterforms, leveraging modern screens’ animation capability to redefine limitations on letterform design; 3) Animated Si5s (Chapter 5), the first animated character system prototype for American Sign Language (ASL), which uses animation to increase text resemblance to live signing and in turn improve understandability without training; and 4) ASL-Search (Chapter 6), an ASL dictionary trained on crowdsourced data collected from volunteer ASL students through ASL-Flash, a novel crowdsourcing platform and educational tool. These systems employ quantitative methods, using large-scale data collected through existing and novel crowdsourcing platforms to explore design spaces and solve data scarcity problems. They also use human-centered approaches to better understand and address usability problems.